11 research outputs found

    Synthesis and Implementation (In STM8S) of Phased Circular Antenna Arrays Using Taguchi Method

    Get PDF
    This paper assesses the effectiveness of a phase-only control strategy based on a customized Taguchi method applied to Uniform Circular Arrays (UCA). The objective is to contribute to the main-lobe optimization of a smart antenna using Taguchi's method. We used this method to determine the phase weights of each element of the circular antenna array so as to steer the main lobe from -65° to 65°, covering the whole angular range of interest. We then built an electronic platform around the STM8S microcontroller to implement the resulting intelligent system. The architecture comprises digital phase shifters, an AD8347 demodulator, an AD8349 modulator, an antenna array, and STM8S-Discovery boards.
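
    The phase-only weighting that the Taguchi search optimizes can be illustrated with the standard uniform-circular-array array-factor model. The Python/NumPy sketch below steers the main lobe with closed-form phases purely to show the model being optimized; the element count and radius are assumptions, and the paper's Taguchi search itself is not reproduced.

```python
import numpy as np

# Minimal sketch of phase-only main-lobe steering for a uniform circular
# array (UCA). The paper searches the phase weights with a customized
# Taguchi method; here the classical closed-form steering phases are used
# only to illustrate the array-factor model being optimized.
# Assumed parameters (not from the paper): N elements, radius in wavelengths.
N = 8                      # number of array elements
radius = 0.5               # array radius in wavelengths
k = 2 * np.pi              # wavenumber for a wavelength-normalized radius
elem_angles = 2 * np.pi * np.arange(N) / N   # element positions on the circle

def array_factor(theta, phase_weights):
    """Array-factor magnitude of the UCA at azimuth angles theta (radians)."""
    theta = np.atleast_1d(theta)[:, None]
    spatial_phase = k * radius * np.cos(theta - elem_angles)
    return np.abs(np.sum(np.exp(1j * (spatial_phase + phase_weights)), axis=1))

# Steer the main lobe to 30 degrees with phase-only weights.
theta0 = np.deg2rad(30.0)
phases = -k * radius * np.cos(theta0 - elem_angles)

scan = np.deg2rad(np.linspace(-65, 65, 261))
af = array_factor(scan, phases)
print("peak at %.1f deg" % np.rad2deg(scan[np.argmax(af)]))
```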

    Dynamic Resource Management in Integrated NOMA Terrestrial-Satellite Networks using Multi-Agent Reinforcement Learning

    Full text link
    This study introduces a resource allocation framework for integrated satellite-terrestrial networks that addresses the challenges of time delay and energy efficiency. The framework leverages local cache pool deployments and non-orthogonal multiple access (NOMA) to reduce time delays and improve energy efficiency. Our approach uses a multi-agent deep deterministic policy gradient (MADDPG) algorithm to optimize user association, cache design, and transmission power control. It comprises two phases: user association and power control, in which the users are treated as agents, and cache optimization, in which the satellite base station is treated as the agent. Through extensive simulations, we demonstrate that our approach surpasses conventional single-agent deep reinforcement learning algorithms in addressing cache design and resource allocation in integrated terrestrial-satellite networks, achieving significantly higher energy efficiency and lower time delays than existing methods.
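
    The multi-agent actor-critic structure referred to above can be sketched as per-agent actors with a critic that sees the joint observations and actions. The PyTorch snippet below is a structural illustration only; the observation/action dimensions, network sizes, and agent count are assumptions, not the paper's settings.

```python
import torch
import torch.nn as nn

# Structural sketch of a MADDPG-style setup: each user agent has its own
# actor, while a centralized critic sees the joint observations/actions.
# All sizes below are illustrative assumptions, not the paper's values.
OBS_DIM, ACT_DIM, N_AGENTS = 8, 2, 4

class Actor(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(OBS_DIM, 64), nn.ReLU(),
                                 nn.Linear(64, ACT_DIM), nn.Tanh())
    def forward(self, obs):            # action in [-1, 1], e.g. a power level
        return self.net(obs)

class CentralCritic(nn.Module):
    def __init__(self):
        super().__init__()
        joint = N_AGENTS * (OBS_DIM + ACT_DIM)
        self.net = nn.Sequential(nn.Linear(joint, 128), nn.ReLU(),
                                 nn.Linear(128, 1))
    def forward(self, joint_obs, joint_act):
        return self.net(torch.cat([joint_obs, joint_act], dim=-1))

actors = [Actor() for _ in range(N_AGENTS)]
critic = CentralCritic()

# One decentralized-execution step on a dummy batch of observations.
obs = torch.randn(32, N_AGENTS, OBS_DIM)
acts = torch.stack([actors[i](obs[:, i]) for i in range(N_AGENTS)], dim=1)
q = critic(obs.reshape(32, -1), acts.reshape(32, -1))
print(q.shape)   # torch.Size([32, 1])
```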

    An efficient scheme for interference mitigation in 6G-IoT wireless networks

    Get PDF
    The Internet of Things (IoT) is the fourth technological revolution in the global information industry, after computers, the Internet, and mobile communication networks. It combines radio-frequency identification devices, infrared sensors, global positioning systems, and various other technologies. Information sensing equipment is connected via the Internet, forming a vast network. When these physical devices are connected to the Internet, the user terminal can be extended and expanded to exchange information, communicate with anything, and carry out identification, positioning, tracking, monitoring, and triggering of corresponding events for each device in the network. In real life, the IoT has a wide range of applications covering many fields, such as smart homes, smart logistics, precision agriculture and animal husbandry, national defense, and the military. One of the most significant impairments in wireless channels is interference, which degrades system performance. Although QR decomposition-based signal detection has attracted attention because of its low complexity, it suffers from poor detection performance. This study therefore proposes a maximum-likelihood-based QR decomposition algorithm. The main idea is to estimate the initial detection layer using the maximum likelihood principle and then detect the remaining layers using reliability-based decisions; in unreliable cases, the optimal candidate is selected from a set of candidate points fed back to the detector. Simulation results show that the proposed algorithm effectively reduces interference and error propagation compared with algorithms reported in the literature.
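
    The baseline that the proposed algorithm improves on, QR-decomposition-based successive detection, can be sketched in a few lines of NumPy. The example below detects QPSK symbols in a small MIMO system; the paper's maximum-likelihood initialization of the first layer and its candidate-selection step are not reproduced.

```python
import numpy as np

# Minimal sketch of QR-decomposition-based detection for a small MIMO
# system with QPSK symbols. This is only the baseline successive
# (decision-feedback) detection the paper builds on.
rng = np.random.default_rng(0)
NT = NR = 4
consts = np.array([1+1j, 1-1j, -1+1j, -1-1j]) / np.sqrt(2)   # QPSK alphabet

H = (rng.standard_normal((NR, NT)) + 1j * rng.standard_normal((NR, NT))) / np.sqrt(2)
x = consts[rng.integers(0, 4, NT)]
noise = 0.05 * (rng.standard_normal(NR) + 1j * rng.standard_normal(NR))
y = H @ x + noise

Q, R = np.linalg.qr(H)
z = Q.conj().T @ y            # rotate the received vector: z = R x + Q^H n

x_hat = np.zeros(NT, dtype=complex)
for i in range(NT - 1, -1, -1):                 # detect from the last layer upward
    residual = z[i] - R[i, i+1:] @ x_hat[i+1:]  # cancel already-detected layers
    est = residual / R[i, i]
    x_hat[i] = consts[np.argmin(np.abs(consts - est))]   # slice to nearest symbol

print("transmitted:", np.round(x, 2))
print("detected:   ", np.round(x_hat, 2))
```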

    Computational Intelligence with Wild Horse Optimization Based Object Recognition and Classification Model for Autonomous Driving Systems

    No full text
    Autonomous systems have recently gained considerable attention in several fields such as transportation, healthcare, autonomous driving, and logistics. Ensuring the safe operation of an autonomous system before releasing it to the general public is essential. Since the design of a completely autonomous system is a challenging process, perception and decision-making are vital components. Effective detection of objects on the road under varying scenarios can considerably enhance the safety of autonomous driving. Recently developed computational intelligence (CI) and deep learning models help in designing object detection algorithms for environment perception based on the camera system present in autonomous driving platforms. With this motivation, this study designs a computational intelligence with wild horse optimization-based object recognition and classification (CIWHO-ORC) model for autonomous driving systems. The proposed CIWHO-ORC technique aims to identify multiple static and dynamic objects such as vehicles, pedestrians, and signboards. The CIWHO-ORC technique combines a krill herd (KH) algorithm with a multi-scale Faster R-CNN model for object detection. In addition, a wild horse optimizer (WHO) with an online sequential ridge regression (OSRR) model is applied for the classification of the recognized objects. The CIWHO-ORC technique is validated on benchmark datasets, and the results demonstrate its promising performance across several measures.
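
    As one illustration of the classification stage, the sketch below fits a plain ridge-regression classifier on synthetic feature vectors; the online/sequential update of OSRR and the wild-horse-optimizer tuning described above are not reproduced, and all dimensions are assumptions.

```python
import numpy as np

# Minimal sketch of a ridge-regression classifier over feature vectors,
# of the kind the OSRR classification stage builds on. The sequential
# update and metaheuristic tuning of the regularization term are omitted.
rng = np.random.default_rng(1)
n, d, n_classes, lam = 200, 32, 3, 1.0

X = rng.standard_normal((n, d))                    # e.g. CNN feature vectors
y = rng.integers(0, n_classes, n)
T = np.eye(n_classes)[y]                           # one-hot targets

# Closed-form ridge solution: W = (X^T X + lam I)^{-1} X^T T
W = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ T)

pred = np.argmax(X @ W, axis=1)
print("training accuracy: %.2f" % (pred == y).mean())
```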

    Explainable Artificial Intelligence Enabled TeleOphthalmology for Diabetic Retinopathy Grading and Classification

    No full text
    Telehealth connects patients to vital healthcare services via remote monitoring, wireless communications, videoconferencing, and electronic consults. By increasing access to specialists and physicians, telehealth helps ensure that patients receive the proper care at the right time and in the right place. Teleophthalmology is a branch of telemedicine that provides eye-care services using digital medical equipment and telecommunication technologies. Multimedia computing with Explainable Artificial Intelligence (XAI) for telehealth has the potential to transform many aspects of society, but several technical challenges must be resolved before this potential can be realized. Advances in artificial intelligence methods and tools reduce waste and waiting times, improve service efficiency and insight, and increase speed, accuracy, and productivity in medicine and telehealth. This study therefore develops an XAI-enabled teleophthalmology model for diabetic retinopathy grading and classification (XAITO-DRGC). The proposed XAITO-DRGC model uses OphthoAI IoMT headsets to enable remote monitoring of diabetic retinopathy (DR). The XAITO-DRGC model applies median filtering (MF) and contrast enhancement as a pre-processing step, followed by U-Net-based image segmentation and a SqueezeNet-based feature extractor. Moreover, the Archimedes optimization algorithm (AOA) with a bidirectional gated recurrent convolutional unit (BGRCU) is exploited for DR detection and classification. The XAITO-DRGC method is validated on a benchmark dataset and the outcomes are assessed from distinct perspectives. Extensive comparison studies demonstrate the improvements of the XAITO-DRGC model over recent approaches.
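
    The pre-processing stage (median filtering followed by contrast enhancement) can be sketched with OpenCV as below. CLAHE is used here as one common contrast-enhancement choice, since the abstract does not specify the operator, and the input is a synthetic stand-in for a retinal image.

```python
import cv2
import numpy as np

# Minimal sketch of the pre-processing stage: median filtering for noise
# removal followed by contrast enhancement. CLAHE is an assumed choice of
# enhancement operator; the input is a synthetic grayscale stand-in.
img = np.random.default_rng(0).integers(0, 256, (256, 256)).astype(np.uint8)

denoised = cv2.medianBlur(img, 5)                      # 5x5 median filter
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
enhanced = clahe.apply(denoised)                       # local contrast enhancement

print(enhanced.shape, enhanced.dtype)
```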

    Empowering smart cities: High-altitude platforms based Mobile Edge Computing and Wireless Power Transfer for efficient IoT data processing

    No full text
    This work presents an efficient framework that combines High Altitude Platform (HAP)-based Mobile Edge Computing (MEC) networks with Wireless Power Transfer (WPT) to optimize resource allocation and task offloading. With the proliferation of smart IoT sensor nodes generating real-time data, there is a pressing need to overcome device limitations, including finite battery life and limited computational resources. The proposed framework leverages HAP-based MEC servers, offering on-demand computation and communication resources without extensive ground infrastructure. Additionally, WPT through terrestrial networks addresses IoT device battery constraints by enabling energy harvesting from nearby access points. The primary focus is a joint optimization that maximizes the number of computed bits while minimizing energy consumption under system constraints. Given the complexity of the optimization problem, we employ a decomposition approach that breaks it into sub-problems. The first part handles mode selection and task segmentation, determining the optimal placement and mode selection variables. The second part addresses resource allocation, optimizing transmission power, offloading time, energy harvesting time, and device computational resources. Numerical results demonstrate the framework's effectiveness compared with relevant benchmark schemes. This approach holds promise for enhancing IoT device performance and energy efficiency in smart city applications.
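
    The local-versus-offload trade-off that the joint optimization resolves can be illustrated with a toy computation-energy model. All constants in the sketch below (task size, CPU energy coefficient, channel gain, bandwidth) are illustrative assumptions rather than values from the paper.

```python
import numpy as np

# Toy numeric sketch of the local-vs-offload trade-off. All parameters
# are illustrative assumptions, not values from the paper.
task_bits = 2e5            # task size (bits)
cycles_per_bit = 1000      # CPU cycles needed per bit
kappa = 1e-27              # effective switched-capacitance coefficient
f_local = 5e8              # local CPU frequency (Hz)
bandwidth, gain, noise = 1e6, 1e-6, 1e-13   # uplink parameters
p_tx = 0.1                 # transmit power (W)

# Local execution: energy = kappa * C * f^2, latency = C / f
cycles = task_bits * cycles_per_bit
e_local = kappa * cycles * f_local**2
t_local = cycles / f_local

# Offloading to the HAP MEC server: Shannon-rate uplink (result download ignored)
rate = bandwidth * np.log2(1 + p_tx * gain / noise)
t_off = task_bits / rate
e_off = p_tx * t_off

print(f"local  : {e_local*1e3:.2f} mJ, {t_local*1e3:.1f} ms")
print(f"offload: {e_off*1e3:.2f} mJ, {t_off*1e3:.1f} ms")
```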

    Hyperparameter Optimizer with Deep Learning-Based Decision-Support Systems for Histopathological Breast Cancer Diagnosis

    No full text
    Histopathological images are a commonly used imaging modality for breast cancer. As manual analysis of histopathological images is difficult, automated tools based on artificial intelligence (AI) and deep learning (DL) methods are needed. Recent advancements in DL approaches help achieve high image classification performance in numerous application areas. This study develops an arithmetic optimization algorithm with deep-learning-based histopathological breast cancer classification (AOADL-HBCC) technique for healthcare decision making. The AOADL-HBCC technique employs median-filtering (MF)-based noise removal and a contrast enhancement process. In addition, the AOADL-HBCC technique applies an arithmetic optimization algorithm (AOA) with a SqueezeNet model to derive feature vectors. Finally, a deep belief network (DBN) classifier with an Adamax hyperparameter optimizer is applied for breast cancer classification. A comparative study shows that the AOADL-HBCC technique performs better than other recent methodologies, with a maximum accuracy of 96.77%.
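
    The feature-derivation stage can be illustrated by using SqueezeNet as a fixed feature extractor, as sketched below with torchvision; the AOA-based tuning and the DBN classifier are not reproduced, and the input size and weight choice are assumptions.

```python
import torch
import torch.nn.functional as F
from torchvision import models

# Minimal sketch of SqueezeNet as a fixed feature extractor for image
# patches. The AOA-based hyperparameter tuning and the DBN classifier of
# the described pipeline are not reproduced here.
squeezenet = models.squeezenet1_1(weights=None)   # weights="DEFAULT" to use pretrained
squeezenet.eval()

backbone = squeezenet.features                    # drop the classification head
with torch.no_grad():
    x = torch.randn(1, 3, 224, 224)               # one dummy RGB histopathology patch
    fmap = backbone(x)                            # (1, 512, 13, 13) feature maps
    features = torch.flatten(F.adaptive_avg_pool2d(fmap, 1), 1)

print(features.shape)                             # torch.Size([1, 512])
```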

    Anomaly Detection in Pedestrian Walkways for Intelligent Transportation System Using Federated Learning and Harris Hawks Optimizer on Remote Sensing Images

    No full text
    Anomaly detection in pedestrian walkways is a vital research area that uses remote sensing to optimize pedestrian traffic and improve flow and safety in intelligent transportation systems (ITS). With computer vision (CV) and machine learning (ML), engineers and researchers can devise more capable techniques and tools for mitigating potential safety hazards and identifying anomalies (i.e., vehicles) in pedestrian walkways. The real-world challenges posed by complex, dynamic environments cannot be handled by conventional offline learning-based vehicle detection methods and shallow approaches. With recent advances in deep learning (DL) and ML, the vehicle detection task can be formulated as a two-class classification problem. This study therefore presents an Anomaly Detection in Pedestrian Walkways for Intelligent Transportation Systems using Federated Learning and Harris Hawks Optimizer (ADPW-FLHHO) algorithm on remote sensing images. The presented ADPW-FLHHO technique focuses on the identification and classification of anomalies, i.e., vehicles, in pedestrian walkways. To accomplish this, the ADPW-FLHHO technique uses the HybridNet model for feature vector generation, with the HHO approach applied for optimal hyperparameter tuning. For anomaly detection, the ADPW-FLHHO technique uses a multi deep belief network (MDBN) model. The experimental results illustrate the promising performance of the ADPW-FLHHO technique over existing models, with maximum AUC scores of 99.36%, 99.19%, and 98.90% on the UCSD Ped1, UCSD Ped2, and Avenue datasets, respectively. The proposed model can therefore be employed for accurate and automated anomaly detection in the ITS environment.
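
    The federated-learning component can be illustrated with a minimal federated-averaging (FedAvg) round, sketched below with synthetic client updates; the HybridNet features, HHO tuning, and MDBN detector described above are not reproduced.

```python
import numpy as np

# Minimal sketch of federated averaging (FedAvg), the aggregation idea
# behind the federated-learning component. Clients, model size, and the
# local "training" step are synthetic stand-ins.
rng = np.random.default_rng(0)
n_clients, model_dim = 5, 10
global_w = np.zeros(model_dim)
client_sizes = rng.integers(50, 200, n_clients)       # local dataset sizes

for rnd in range(3):                                   # communication rounds
    local_models = []
    for k in range(n_clients):
        # stand-in for local SGD: perturb the broadcast global model
        local_models.append(global_w + 0.1 * rng.standard_normal(model_dim))
    # aggregate with a weighted average by local dataset size
    weights = client_sizes / client_sizes.sum()
    global_w = np.sum([w * m for w, m in zip(weights, local_models)], axis=0)
    print(f"round {rnd}: ||w|| = {np.linalg.norm(global_w):.3f}")
```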